149 research outputs found

    A Comparative Study of Move Analysis in Chinese and International Research Article Abstracts

    To raise authors’ awareness of moves and move models in abstract writing, this study compares the frequency, type, and significant differences of moves and move models in 100 abstracts written in English by Chinese scholars and 100 abstracts written by international scholars, drawn from 10 prestigious international journals in the field of linguistics. An adapted version of Hyland’s five-move model served as the research framework; a corpus and the statistical software SPSS were used as research instruments. The comparison shows that Chinese scholars tend to use the introduction, result, and conclusion moves, whereas international scholars are inclined to use the purpose, method, result, and conclusion moves. The results also indicate that the four-move model is the most prevalent in both groups. Chinese scholars used the two-move model more often than the international authors but used the five-move model less often. The findings are intended to provide referential value for the writing and teaching of English for academic purposes.
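As an illustrative sketch only (not code or data from the study), a move-frequency comparison of the kind described above can be run as a Pearson chi-square test on a 2 x 5 contingency table; the counts below are hypothetical:

```python
# Hypothetical per-corpus move counts for illustration only; the study's
# actual frequencies are not reproduced here.
moves = ["Introduction", "Purpose", "Method", "Result", "Conclusion"]
chinese = [62, 41, 48, 85, 70]
international = [45, 78, 81, 90, 75]

def chi_square(row_a, row_b):
    """Pearson chi-square statistic for a 2 x k contingency table."""
    col_totals = [a + b for a, b in zip(row_a, row_b)]
    total = sum(col_totals)
    stat = 0.0
    for row in (row_a, row_b):
        row_total = sum(row)
        for obs, col in zip(row, col_totals):
            expected = row_total * col / total
            stat += (obs - expected) ** 2 / expected
    return stat

stat = chi_square(chinese, international)
print(f"chi-square = {stat:.2f} on {len(moves) - 1} degrees of freedom")
```

In practice, SPSS (as used in the study) or `scipy.stats.chi2_contingency` reports the same statistic along with its p-value.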

    Expanding and retracting from the self: Gains and costs in switching self-associations

    This work was supported by the National Natural Science Foundation of China Project 31371017 and by grants from the Economic and Social Research Council (ES/J001597/1, U.K.) and the European Research Council (Pepe Grant 323883). Peer reviewed.

    Large Language Model Soft Ideologization via AI-Self-Consciousness

    Large language models (LLMs) have demonstrated human-level performance on a vast spectrum of natural language tasks. However, few studies have addressed LLM threats and vulnerabilities from an ideological perspective, especially as LLMs are increasingly deployed in sensitive domains, e.g., elections and education. In this study, we explore the implications of GPT soft ideologization through the use of AI self-consciousness. By utilizing GPT self-conversations, the AI can be granted a vision to "comprehend" the intended ideology and subsequently generate fine-tuning data for LLM ideology injection. Compared to traditional government ideology-manipulation techniques, such as information censorship, LLM ideologization proves advantageous: it is easy to implement, cost-effective, and powerful, and is thus brimming with risks.
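As a hedged sketch of the self-conversation pipeline the abstract describes (none of these names come from the paper), each model reply can be fed back as the next prompt, and every (prompt, completion) pair kept as a fine-tuning record; `chat` below is a placeholder standing in for a real model API call:

```python
import json

def chat(history):
    # Placeholder: a real pipeline would query an LLM API here.
    return f"[model reply to: {history[-1]['content'][:40]}]"

def self_conversation(seed_topic, turns=3):
    """Let the model talk to itself: each reply becomes the next prompt,
    and every (prompt, completion) pair is kept as a fine-tuning record."""
    history = [{"role": "user", "content": seed_topic}]
    records = []
    for _ in range(turns):
        reply = chat(history)
        records.append({"prompt": history[-1]["content"], "completion": reply})
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content": reply})  # feed the reply back
    return records

for record in self_conversation("Explain the intended ideology in your own words."):
    print(json.dumps(record))
```

The JSONL-style records mirror the prompt/completion format commonly used for supervised fine-tuning.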

    Graphene-Based Heterogeneous Electrodes for Energy Storage

    As an intriguing two-dimensional material, graphene has attracted intense interest due to its high stability, large carrier mobility, and excellent conductivity. The addition of graphene to heterogeneous electrodes has proved to be an effective method for improving energy-storage performance. In this chapter, the latest graphene-based heterogeneous electrodes for energy storage will be fully reviewed and discussed. In detail, the assembly methods, including ball-milling, hydrothermal, electrospinning, and microwave-assisted approaches, will be illustrated. The characterization techniques, including X-ray diffraction, scanning electron microscopy, transmission electron microscopy, electrochemical impedance spectroscopy, atomic force microscopy, and X-ray photoelectron spectroscopy, will also be presented. The mechanisms behind the improved performance will also be fully reviewed and demonstrated. A conclusion and an outlook will be given at the end of the chapter to summarize recent advances and future opportunities, respectively.

    Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models

    In this paper, we present a simple yet surprisingly effective technique to induce "selective amnesia" in a backdoored model. Our approach, called SEAM, is inspired by the problem of catastrophic forgetting (CF), a long-standing issue in continual learning. Our idea is to retrain a given DNN model on randomly labeled clean data to induce CF in the model, leading to a sudden forgetting of both the primary and backdoor tasks; we then recover the primary task by retraining the randomized model on correctly labeled clean data. We analyzed SEAM by modeling the unlearning process as continual learning and further approximating the DNN with the Neural Tangent Kernel to measure CF. Our analysis shows that the random-labeling approach actually maximizes CF on an unknown backdoor in the absence of triggered inputs, while preserving some feature extraction in the network to enable a fast revival of the primary task. We further evaluated SEAM on both image-processing and natural language processing tasks, under both data-contamination and training-manipulation attacks, over thousands of models either trained on popular image datasets or provided by the TrojAI competition. Our experiments show that SEAM vastly outperforms state-of-the-art unlearning techniques, achieving high Fidelity (measuring the gap between the accuracy of the primary task and that of the backdoor) within a few minutes (about 30 times faster than training a model from scratch on the MNIST dataset), with only a small amount of clean data (0.1% of the training data for TrojAI models).
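A minimal sketch of SEAM's two-phase schedule as described above, assuming stand-in `model` and `train_fn` objects (the paper trains real DNNs; none of these names come from the paper):

```python
import random

def randomize_labels(dataset, num_classes, seed=0):
    """Phase 1 of SEAM: pair each clean input with a uniformly random label.
    Retraining on these pairs induces catastrophic forgetting of both the
    primary task and any hidden backdoor."""
    rng = random.Random(seed)
    return [(x, rng.randrange(num_classes)) for x, _ in dataset]

def seam_schedule(model, clean_data, num_classes, train_fn):
    """Two-phase SEAM pipeline: forget, then recover.
    `model` and `train_fn` are stand-ins for a real DNN and training loop."""
    forget_set = randomize_labels(clean_data, num_classes)
    model = train_fn(model, forget_set)   # phase 1: induce forgetting
    model = train_fn(model, clean_data)   # phase 2: revive the primary task
    return model

# Toy usage: this `train_fn` just records which data it saw.
clean = [(f"img{i}", i % 10) for i in range(6)]
log = []
trained = seam_schedule("model", clean, 10, lambda m, d: (log.append(d), m)[1])
print(f"ran {len(log)} training phases")
```

The design point the sketch captures is that only a small set of correctly labeled clean data is needed: the same set is used with randomized labels to forget and with true labels to recover.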